On Mon, 10 Jan 2000 22:25:48 -0600, Mark Gordon wrote:
>There are two ways I do this: rendering a bunch of stuff with low
>resolution just to see whether it works (I've known the render to fail,
>and I've known the render to hang). Running allscene.sh on my machine
>typically takes around 25 minutes. The other method is rendering at
>larger resolution (over a period of days) and eyeballing for
>unusual artifacts. I've seen all the scenes in the standard
>distribution enough to have some idea if something goes completely
>screwy, and I also have "canonical" high-resolution versions saved on my
>hard drive. The MegaPov example scenes are not so familiar.
The reason I asked is that it's something I've thought about myself.
I think it should be relatively easy to work out an automated solution
for the cases where there aren't any failures:
Use pnminvert to invert the canonical copy of the image, then use ppmmix
to mix 50% of it with 50% of the test image. Wherever the two images are
identical, the mix comes out exactly 50% gray, so any deviation marks a
changed pixel. Finally, run ppmhist on the result and examine its output
to see how many pixels are significantly different from solid 50% gray.
If the count is over a certain threshold, you can be confident there's
been a significant change in how the image was rendered.
Adjusting the threshold and the definition of "significantly different from
solid 50% gray" allows you to detect smaller and smaller differences in
rendering, up to and including perfect reproduction of the original image.
So you can sort the images into categories: "really different - probably a bug",
"a little different", and "perfect".
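Here's a minimal sketch of that comparison in Python rather than netpbm,
for anyone without the tools handy. Pixels are plain (r, g, b) tuples, and
the tolerance and bug threshold are made-up placeholders you'd tune:

```python
def fraction_differing(canonical, test, tol=8, maxval=255):
    """Invert the canonical pixels and average them with the test pixels
    (the pnminvert + 50/50 ppmmix step); count how many mixed pixels
    deviate from mid-gray by more than tol on any channel."""
    mid = maxval / 2
    bad = 0
    for (r1, g1, b1), (r2, g2, b2) in zip(canonical, test):
        mixed = ((maxval - r1 + r2) / 2,
                 (maxval - g1 + g2) / 2,
                 (maxval - b1 + b2) / 2)
        if any(abs(c - mid) > tol for c in mixed):
            bad += 1
    return bad / len(canonical)

def classify(frac, bug_threshold=0.01):
    """Sort a render into the three buckets described above."""
    if frac == 0:
        return "perfect"
    if frac > bug_threshold:
        return "really different - probably a bug"
    return "a little different"
```

An identical pair mixes to exactly mid-gray everywhere and lands in
"perfect"; in the real pipeline ppmhist hands you the same per-color
counts directly from the mixed image.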
--
These are my opinions. I do NOT speak for the POV-Team.
The superpatch: http://www2.fwi.com/~parkerr/superpatch/
My other stuff: http://www2.fwi.com/~parkerr/traces.html